
    A benchmark for epithelial cell tracking

    Segmentation and tracking of epithelial cells in light microscopy (LM) movies of developing tissue is a common task in cell and developmental biology. Epithelial cells are densely packed cells that form a honeycomb-like grid. This dense packing distinguishes membrane-stained epithelial cells from the types of objects recent cell tracking benchmarks have focused on, such as cell nuclei and freely moving individual cells. While semi-automated tools for segmentation and tracking of epithelial cells are available to biologists, common tools rely on classical watershed-based segmentation and engineered tracking heuristics, and entail a tedious phase of manual curation. However, a different kind of densely packed cell imagery has become a focus of recent computer vision research, namely electron microscopy (EM) images of neurons. In this work we explore the benefits of two recent neuron EM segmentation methods for epithelial cell tracking in light microscopy. In particular, we adapt two different deep learning approaches for neuron segmentation, namely Flood Filling Networks and MALA, to epithelial cell tracking. We benchmark these on a dataset of eight movies with up to 200 frames. We compare to Moral Lineage Tracing, a combinatorial optimization approach that recently claimed state-of-the-art results for epithelial cell tracking. Furthermore, we compare to Tissue Analyzer, an off-the-shelf tool used by biologists that serves as our baseline.
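
    To make the underlying task concrete, the sketch below shows a minimal, greedy frame-to-frame linking of labeled epithelial segmentations by maximum pixel overlap. It is not the FFN, MALA, or Moral Lineage Tracing method described in the abstract; the function name, the overlap threshold, and the labeling convention (0 = background) are illustrative assumptions only.

    # Minimal sketch: link each cell in frame t+1 to the cell in frame t it
    # overlaps most, provided the overlap covers enough of the new cell.
    import numpy as np

    def link_frames(labels_prev, labels_next, min_overlap=0.5):
        """Return {label_in_next: label_in_prev} for sufficiently overlapping cells."""
        links = {}
        for cell in np.unique(labels_next):
            if cell == 0:                      # 0 = background (assumed convention)
                continue
            mask = labels_next == cell
            prev_ids, counts = np.unique(labels_prev[mask], return_counts=True)
            keep = prev_ids != 0               # ignore overlap with background
            if not keep.any():
                continue
            prev_ids, counts = prev_ids[keep], counts[keep]
            best = np.argmax(counts)
            if counts[best] / mask.sum() >= min_overlap:
                links[int(cell)] = int(prev_ids[best])
        return links

    A real tracker for densely packed tissue would also have to handle cell divisions, segmentation errors, and cells entering or leaving the field of view, which is where the methods compared in the paper differ.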

    Objective comparison of particle tracking methods

    Particle tracking is of key importance for quantitative analysis of intracellular dynamic processes from time-lapse microscopy image data. Because manually detecting and following large numbers of individual particles is not feasible, automated computational methods have been developed for these tasks by many groups. Aiming to perform an objective comparison of methods, we gathered the community and organized an open competition in which participating teams applied their own methods independently to a commonly defined data set covering diverse scenarios. Performance was assessed using commonly defined measures. Although no single method performed best across all scenarios, the results revealed clear differences between the various approaches, leading to notable practical conclusions for users and developers.
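
    As a rough illustration of how such a comparison can be scored, the sketch below matches detected particle positions to ground-truth positions within a gating radius and reports detection recall, precision, and localization RMSE for one frame. This is a simplified stand-in, not the challenge's actual commonly defined measures; the gate value and function name are assumptions.

    # Minimal sketch: optimal one-to-one matching of detections to ground truth,
    # then simple detection and localization statistics. Assumes non-empty inputs.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def evaluate_detections(gt, det, gate=5.0):
        """gt: (N, 2) ground-truth (x, y) positions; det: (M, 2) detected positions."""
        cost = np.linalg.norm(gt[:, None, :] - det[None, :, :], axis=2)  # (N, M) distances
        rows, cols = linear_sum_assignment(cost)                          # optimal pairing
        dists = cost[rows, cols]
        matched = dists <= gate                                           # drop pairs beyond the gate
        n_match = int(matched.sum())
        rmse = float(np.sqrt(np.mean(dists[matched] ** 2))) if n_match else float("nan")
        return {"recall": n_match / len(gt),
                "precision": n_match / len(det),
                "rmse": rmse}

    Full tracking benchmarks additionally score how well entire trajectories are linked over time, not just per-frame detections.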

    3-D Quantification of Filopodia in Motile Cancer Cells


    Automatic Fusion of Segmentation and Tracking Labels.

    High-quality labeled training images are required for developing well-performing analysis pipelines. This is, of course, also true for biological image data, where such labels are usually hard to obtain. We distinguish human labels (gold corpora) and labels generated by computer algorithms (silver corpora). A naturally arising problem is to merge multiple corpora into larger bodies of labeled training data. While fusion of labels in static images is already an established field, dealing with labels in time-lapse image data remains to be explored. Obtaining a gold corpus for segmentation is usually very time-consuming and hence expensive. For this reason, gold corpora for object tracking often use object detection markers instead of dense segmentations. If dense segmentations of tracked objects are desired later on, an automatic merge of the detection-based gold corpus with (silver) segmentation corpora of the individual time points becomes necessary. Here we present such an automatic merging system and demonstrate its utility on corpora from the Cell Tracking Challenge. We additionally release all label fusion algorithms as freely available and open plugins for Fiji (https://github.com/xulman/CTC-FijiPlugins).
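
    The sketch below illustrates the basic idea of such a merge for a single frame: each tracked detection marker adopts the silver segmentation segment it falls in, yielding a dense segmentation that carries tracking identities. The released Fiji plugins implement more elaborate fusion; the function name, the marker format, and the handling of unmatched markers here are illustrative assumptions only.

    # Minimal sketch: transfer track identities from point markers onto a
    # per-frame segmentation (0 = background). Markers landing in background,
    # or two markers landing in the same segment, are not resolved here.
    import numpy as np

    def merge_markers_with_segments(markers, segmentation):
        """markers: {track_id: (row, col)}; segmentation: 2-D int label image.
        Returns a label image where matched segments carry their track id."""
        tracked = np.zeros_like(segmentation)
        for track_id, (r, c) in markers.items():
            seg_label = segmentation[r, c]
            if seg_label != 0:                         # marker hit a segment
                tracked[segmentation == seg_label] = track_id
        return tracked

    In practice, unmatched markers and segments claimed by several markers need additional rules (e.g. nearest-segment assignment or splitting), which is part of what a dedicated fusion tool has to handle.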